Regularising disparity estimation via multi task learning with structured light reconstruction
3D reconstruction is a useful tool for surgical planning and guidance.
However, the lack of available medical data stunts research and development in
this field, as supervised deep learning methods for accurate disparity
estimation rely heavily on large datasets containing ground truth information.
Alternative approaches to supervision, such as self-supervision, have been
explored and can reduce or even eliminate the need for ground truth. However,
no proposed alternative has demonstrated performance close to that of a
supervised setup. This work
aims to alleviate this issue. In this paper, we investigate the learning of
structured light projections to enhance the development of direct disparity
estimation networks. We show for the first time that it is possible to
accurately learn the projection of structured light on a scene, implicitly
learning disparity. Secondly, we explore the use of a multi-task learning
(MTL) framework for the joint training of structured light and disparity. We
present results showing that MTL with structured light improves disparity
training without increasing the number of model parameters. Our MTL setup
outperformed the single-task learning (STL) network in every validation test.
Notably, in the medical generalisation test, the STL error was 1.4 times that
of the best MTL result. The benefit of using MTL is emphasised when the
training data is limited. A dataset containing
stereoscopic images, disparity maps and structured light projections on medical
phantoms and ex vivo tissue was created for evaluation together with virtual
scenes. This dataset will be made publicly available in the future.
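The abstract's central claim is that a shared network can serve both tasks, so the structured-light head regularises disparity training at no extra parameter cost. A minimal sketch of that idea, assuming a generic shared-encoder/two-head architecture with a weighted joint loss (the layer sizes, `alpha` weighting, and head names are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared encoder feeds two task heads, so the structured-light task
# adds only a small head on top of the shared parameters.
n_features, n_hidden = 8, 4
W_shared = rng.normal(size=(n_features, n_hidden))  # shared encoder weights
w_disparity = rng.normal(size=n_hidden)             # disparity head
w_light = rng.normal(size=n_hidden)                 # structured-light head

def forward(x):
    h = np.tanh(x @ W_shared)                # shared representation
    return h @ w_disparity, h @ w_light      # one prediction per task

def mtl_loss(x, y_disp, y_light, alpha=0.5):
    """Joint objective: weighted sum of the two per-task MSE losses."""
    pred_disp, pred_light = forward(x)
    loss_disp = np.mean((pred_disp - y_disp) ** 2)
    loss_light = np.mean((pred_light - y_light) ** 2)
    return alpha * loss_disp + (1 - alpha) * loss_light

# Toy batch: 16 samples with random targets for both tasks.
x = rng.normal(size=(16, n_features))
y_d = rng.normal(size=16)
y_l = rng.normal(size=16)
loss = mtl_loss(x, y_d, y_l)
```

With `alpha=1.0` the objective collapses to the single-task disparity loss, which is one way to compare STL and MTL behaviour within the same code path.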
Identifying Visible Tissue in Intraoperative Ultrasound Images during Brain Surgery: A Method and Application
Intraoperative ultrasound scanning is a demanding visuotactile task. It
requires operators to simultaneously localise the ultrasound perspective and
manually make slight adjustments to the pose of the probe, taking care not
to apply excessive force or break contact with the tissue, whilst also
characterising the visible tissue. In this paper, we propose a method for the
identification of the visible tissue, which enables the analysis of ultrasound
probe and tissue contact via the detection of acoustic shadow and construction
of confidence maps of the perceptual salience. Detailed validation with both in
vivo and phantom data is performed. First, we show that our technique is
capable of achieving state-of-the-art acoustic shadow scan-line
classification, with an average binary classification accuracy of 0.87 on
unseen data.
Second, we show that our framework for constructing confidence maps produces
the expected response as the probe's pose is oriented into and out of
optimality, achieving an average RMSE of 0.174 across five scans. The
performance evaluation demonstrates the potential clinical value of the
method, which can be used both to assist clinical training and to optimise
robot-assisted ultrasound tissue scanning.
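The two components named in the abstract, scan-line shadow classification and per-pixel confidence maps, can be sketched in a toy form. The threshold rule and the exponential attenuation model below are illustrative assumptions only; the paper's actual method is not specified in the abstract:

```python
import numpy as np

def shadow_scanlines(image, threshold=0.2):
    """Binary shadow label per scan line (column) of a [0, 1] image.

    A scan line whose mean intensity falls below the threshold is
    flagged as acoustically shadowed (assumed rule, for illustration).
    """
    return image.mean(axis=0) < threshold

def confidence_map(image, decay=3.0):
    """Per-pixel confidence that decays with accumulated darkness.

    Confidence drops exponentially with the cumulative attenuation
    integrated along each scan line (top of image = probe surface).
    """
    darkness = 1.0 - image
    accumulated = np.cumsum(darkness, axis=0) / image.shape[0]
    return np.exp(-decay * accumulated)

# Synthetic B-mode-like image: bright tissue with a shadowed region.
img = np.full((64, 32), 0.8)
img[:, 20:] = 0.05                 # simulated acoustic shadow
labels = shadow_scanlines(img)     # per-column shadow flags
conf = confidence_map(img)         # per-pixel confidence in [0, 1]
```

In this toy image the shadowed columns are flagged, and confidence decreases with depth everywhere but falls off far faster inside the shadow, which is the qualitative behaviour a contact-quality measure needs.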